Prior-Informed Uncertainty Modelling with Bayesian Polynomial Approximations
Authors
Abstract
Similar papers
Efficient Nonparametric Bayesian Modelling with Sparse Gaussian Process Approximations
Sparse approximations to Bayesian inference for nonparametric Gaussian Process models scale linearly in the number of training points, allowing for the application of powerful kernel-based models to large datasets. We present a general framework based on the informative vector machine (IVM) (Lawrence et al., 2003) and show how the complete Bayesian task of inference and learning of free hyperparameters...
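As an illustration of the general idea (not the IVM machinery itself), the following is a minimal sketch of sparse GP regression with a subset-of-regressors inducing-point approximation, which reduces the dominant cost from O(n^3) to O(n m^2) for m inducing points. The kernel, data, and function names are assumptions made for this example, not taken from the paper.

import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_predict(X, y, Z, Xstar, noise=0.1):
    # Subset-of-regressors posterior using the m inducing points Z.
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))      # m x m
    Kmn = rbf(Z, X)                               # m x n
    Ksm = rbf(Xstar, Z)                           # n* x m
    Sigma = np.linalg.inv(Kmm + Kmn @ Kmn.T / noise**2)
    mean = Ksm @ Sigma @ (Kmn @ y) / noise**2
    var = np.sum((Ksm @ Sigma) * Ksm, axis=1)
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Z = X[::50]                                       # 10 inducing points
mu, var = sparse_gp_predict(X, y, Z, np.linspace(-3, 3, 5)[:, None])
print(mu, var)

The IVM itself selects the inducing set greedily by information gain rather than by simple subsampling; the sketch above only shows the cheaper posterior computation that such a selection enables.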
Bayesian Fuzzy Hypothesis Testing with Imprecise Prior Distribution
This paper considers the testing of fuzzy hypotheses within a Bayesian approach. Using a notion of prior distribution with interval- or fuzzy-valued parameters, we extend the concept of the posterior probability of a fuzzy hypothesis. Some of its properties are also investigated. The feasibility and effectiveness of the proposed methods are also cla...
Hierarchical Bayesian Language Modelling for the Linguistically Informed
In this work I address the challenge of augmenting n-gram language models according to prior linguistic intuitions. I argue that the family of hierarchical Pitman-Yor language models is an attractive vehicle through which to address the problem, and demonstrate the approach by proposing a model for German compounds. In an empirical evaluation, the model outperforms the Kneser-Ney model in terms...
Bayesian Modelling of Outstanding Liabilities Incorporating Claim Count Uncertainty
This paper deals with the prediction of the amount of outstanding claims that an insurance company will pay in the near future. We consider various competing models using Bayesian theory and Markov chain Monte Carlo methods. Claim counts are used to add a further hierarchical stage in the usual log-normal and state-space models. In this way, we incorporate information from both the outs...
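To make the idea of a claim-count-driven hierarchical stage concrete, here is a hypothetical minimal sketch in PyMC; it is not the paper's model, and the priors, variable names, and toy data are assumptions for illustration only.

import numpy as np
import pymc as pm

# Toy data (assumed): claim counts and aggregate paid amounts per cell.
counts = np.array([120, 95, 80, 60])
amounts = np.array([1.1e6, 0.9e6, 0.7e6, 0.5e6])

with pm.Model():
    # Log-scale severity parameters shared across cells.
    mu_sev = pm.Normal("mu_sev", mu=9.0, sigma=2.0)
    sigma_sev = pm.HalfNormal("sigma_sev", sigma=1.0)

    # Claim counts enter as an extra hierarchical stage: cell-level means scale with counts.
    log_mean = mu_sev + pm.math.log(counts)

    # Log-normal likelihood for the aggregate outstanding amounts.
    pm.LogNormal("amounts_obs", mu=log_mean, sigma=sigma_sev, observed=amounts)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)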
Bayesian Optimisation with Continuous Approximations
To present the theoretical results for GP-UCB, we begin by defining the Maximum Information Gain (MIG), which characterises the statistical difficulty of GP bandits. Definition 2 (Maximum Information Gain; Srinivas et al., 2010). Let f ∼ GP(0, φ_X). Consider any A ⊂ ℝ^d and let A′ = {x_1, ..., x_n} ⊂ A be a finite subset. Let f_{A′}, ε_{A′} ∈ ℝ^n be such that (f_{A′})_i = f(x_i) and (ε_{A′})_i ∼ N(0, η²). Let y_{A′} =...
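The excerpt is cut off mid-definition; as a reference point, the standard statement (following Srinivas et al., 2010) takes the noisy observations y_{A′} = f_{A′} + ε_{A′} and defines the MIG through the Shannon mutual information I, as in the sketch below.

% Maximum Information Gain of a set A from n noisy observations (Srinivas et al., 2010):
\Psi_n(A) \;=\; \max_{A' \subset A,\; |A'| = n} I\bigl(y_{A'};\, f_{A'}\bigr),
\qquad y_{A'} = f_{A'} + \epsilon_{A'}.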
Journal
Journal title: Social Science Research Network
Year: 2022
ISSN: 1556-5068
DOI: https://doi.org/10.2139/ssrn.4093619